Translation and Dictionary
Words near each other
・ Generalized Hebbian Algorithm
・ Generalized helicoid
・ Generalized Helmholtz theorem
・ Generalized hypergeometric function
・ Generalized hyperhidrosis
・ Generalized integer gamma distribution
・ Generalized inverse
・ Generalized inverse Gaussian distribution
・ Generalized inversive congruential pseudorandom numbers
・ Generalized iterative scaling
・ Generalized Jacobian
・ Generalized Kac–Moody algebra
・ Generalized keyboard
・ Generalized Korteweg–de Vries equation
・ Generalized Lagrangian mean
Generalized least squares
・ Generalized lentiginosis
・ Generalized lifting
・ Generalized linear array model
・ Generalized linear mixed model
・ Generalized linear model
・ Generalized logistic distribution
・ Generalized Lotka–Volterra equation
・ Generalized lymphadenopathy
・ Generalized map
・ Generalized Maxwell model
・ Generalized mean
・ Generalized method of moments
・ Generalized minimal residual method
・ Generalized minimum-distance decoding



Generalized least squares : Wikipedia English edition
Generalized least squares

In statistics, generalized least squares (GLS) is a technique for estimating the unknown parameters in a linear regression model. GLS can be used to perform linear regression when there is a certain degree of correlation between the residuals of the regression model. In these cases, ordinary least squares and weighted least squares can be statistically inefficient, or even give misleading inferences. GLS was first described by Alexander Aitken in 1934.
== Method outline ==
In a typical linear regression model we observe data \{y_i, x_{ij}\}_{i=1,\dots,n,\ j=1,\dots,p} on ''n'' statistical units. The response values are placed in a vector Y = (y_1, \dots, y_n)', and the predictor values are placed in the design matrix X = [x_{ij}], where x_{ij} is the value of the ''j''th predictor variable for the ''i''th unit. The model assumes that the conditional mean of ''Y'' given ''X'' is a linear function of ''X'', whereas the conditional variance of the error term given ''X'' is a ''known'' matrix Ω. This is usually written as
:
Y = X\beta + \varepsilon, \qquad \mathrm{E}[\varepsilon \mid X] = 0,\ \operatorname{Var}[\varepsilon \mid X] = \Omega.

Here ''β'' is a vector of unknown “regression coefficients” that must be estimated from the data.
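For concreteness, the following sketch (a minimal NumPy example; the AR(1)-style covariance and all variable names are illustrative assumptions, not part of the original article) simulates data from this model with a known Ω:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 3                       # n statistical units, p predictor variables
X = rng.normal(size=(n, p))         # design matrix X = [x_ij]
beta = np.array([2.0, -1.0, 0.5])   # "true" coefficients, unknown in practice

# A known error covariance Omega; the AR(1)-style structure here is purely
# illustrative -- any symmetric positive-definite matrix would do.
rho = 0.7
idx = np.arange(n)
Omega = rho ** np.abs(np.subtract.outer(idx, idx))

# Errors with E[eps | X] = 0 and Var[eps | X] = Omega, then the response.
eps = rng.multivariate_normal(np.zeros(n), Omega)
Y = X @ beta + eps
```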
Suppose ''b'' is a candidate estimate for ''β''. Then the residual vector for ''b'' is ''Y'' − ''Xb''. The generalized least squares method estimates ''β'' by minimizing the squared Mahalanobis length of this residual vector:
:
\hat\beta = \underset{b}{\operatorname{arg\,min}}\,(Y-Xb)'\,\Omega^{-1}(Y-Xb).

Since the objective is a quadratic form in ''b'', differentiating and setting the gradient -2X'\Omega^{-1}(Y-Xb) to zero gives the normal equations X'\Omega^{-1}Xb = X'\Omega^{-1}Y, so the estimator has an explicit formula:
:
\hat\beta = (X'\Omega^{-1}X)^{-1} X'\Omega^{-1}Y.
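The explicit formula translates directly into code. The sketch below (again a NumPy-based illustration; `gls_estimate` is a hypothetical helper, not a standard library function) evaluates it by solving linear systems rather than forming Ω⁻¹ explicitly, which is the usual numerically stable route:

```python
import numpy as np

def gls_estimate(X, Y, Omega):
    """Evaluate beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} Y.

    Linear systems are solved instead of inverting Omega, which is
    the standard numerically stable way to apply the formula.
    """
    Oinv_X = np.linalg.solve(Omega, X)   # Omega^{-1} X
    Oinv_Y = np.linalg.solve(Omega, Y)   # Omega^{-1} Y
    return np.linalg.solve(X.T @ Oinv_X, X.T @ Oinv_Y)

# With X, Y, Omega as simulated in the earlier sketch:
#   beta_hat = gls_estimate(X, Y, Omega)
# beta_hat should then lie close to the coefficients (2.0, -1.0, 0.5).
```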


Excerpt source: the free encyclopedia Wikipedia
Read the full text of "Generalized least squares" on Wikipedia



Translation and Dictionary : Internet resources for translation

Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.